
Granger Components Analysis: Unsupervised learning of latent temporal dependencies

Neural Information Processing Systems

Here the concept of Granger causality is employed to propose a new criterion for unsupervised learning that is appropriate in the case of temporally-dependent source signals. The basic idea is to identify two projections of a multivariate time series such that the Granger causality among the resulting pair of components is maximized.
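The criterion above can be sketched numerically: score how much one projected component Granger-causes another, then search over pairs of projection directions. This is a minimal illustration only; the function names, the single-lag autoregressive fits, and the crude random-search optimizer are assumptions and do not reproduce the paper's actual learning procedure.

```python
import numpy as np

def gc_score(u, v):
    """Lag-1 Granger causality score from u to v: the log ratio of
    residual variances with and without u's lag in a linear
    autoregressive prediction of v."""
    V = v[1:]
    Zr = np.column_stack([v[:-1], np.ones(len(V))])          # v's own lag
    Zu = np.column_stack([v[:-1], u[:-1], np.ones(len(V))])  # plus u's lag
    rr = V - Zr @ np.linalg.lstsq(Zr, V, rcond=None)[0]
    ru = V - Zu @ np.linalg.lstsq(Zu, V, rcond=None)[0]
    return np.log(np.var(rr) / np.var(ru))

def granger_components(X, n_draws=500, seed=0):
    """Crude random search (an illustrative stand-in for a proper
    optimizer) for a pair of unit projections (w1, w2) of the
    multivariate series X (T x d) whose components maximise the
    Granger causality from component 1 to component 2."""
    rng = np.random.default_rng(seed)
    best = (-np.inf, None, None)
    for _ in range(n_draws):
        w1, w2 = rng.standard_normal((2, X.shape[1]))
        w1 /= np.linalg.norm(w1)
        w2 /= np.linalg.norm(w2)
        s = gc_score(X @ w1, X @ w2)
        if s > best[0]:
            best = (s, w1, w2)
    return best
```

On synthetic data where one latent source drives another inside a noisy mixture, the search recovers a direction pair with a clearly positive causality score.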








Constraint- and Score-Based Nonlinear Granger Causality Discovery with Kernels

Murphy, Fiona, Benavoli, Alessio

arXiv.org Machine Learning

Granger causality (GC) [15] is a time series causal discovery framework that uses predictive modeling to identify the underlying causal structure of a time series system. Relying on the assumption that cause precedes effect, GC assesses whether including the lagged information from one time series in the autoregressive model of a second time series enhances its predictions. This improvement indicates a predictive relationship between the time series variables, where one time series provides supplemental information about the future of another time series, thereby signifying the presence of a (Granger) causal relationship. GC requires only observational data, and has been used for time series causal discovery across diverse domains, including climate science [33], political and social sciences [17], econometrics [4], and biological systems studies [13]. The original formulation of GC requires several assumptions to be satisfied for causal identifiability. With regard to the candidate time series system, it is assumed that the time series variables are stationary, and that all variables are observed (absence of latent confounders). GC was initially proposed for bivariate time series systems, but was generalised for the multivariate setting to accommodate the assumption that all relevant variables are included in the analysis [15]. Additional assumptions are made with regard to the types of causal relationships that can be identified within the time series system. GC cannot estimate a causal relationship between time series at an instantaneous time point, relying instead on the relationship between the lags and the predicted values to determine a GC relationship.
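The bivariate test the abstract describes can be sketched as a comparison of two least-squares autoregressions: a restricted model of y from its own lags, and an unrestricted model that also includes lags of x. This is a minimal sketch, not the paper's kernel-based method; the function name, the fixed lag order, and the use of a residual-variance log ratio (rather than a formal F-test) are illustrative assumptions.

```python
import numpy as np

def granger_score(x, y, p=2):
    """Granger causality score from x to y with lag order p.

    Compares the residual variance of an AR(p) model of y (restricted)
    against one augmented with p lags of x (unrestricted); a positive
    log ratio means x's lags improve the prediction of y.
    """
    T = len(y)
    Y = y[p:]
    # Lag matrices: column k holds the k-th lag aligned with Y.
    lags_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))

    # Restricted model: y_t from its own lags only.
    Xr = np.hstack([ones, lags_y])
    res_r = Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]

    # Unrestricted model: y_t from lags of y and lags of x.
    Xu = np.hstack([ones, lags_y, lags_x])
    res_u = Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]

    return np.log(np.var(res_r) / np.var(res_u))
```

On a simulated pair where x drives y with a one-step delay, the score in the causal direction is large while the reverse direction stays near zero, matching the cause-precedes-effect logic described above.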


Empirical Results for Adjusting Truncated Backpropagation Through Time while Training Neural Audio Effects

Bourdin, Yann, Legrand, Pierrick, Roche, Fanny

arXiv.org Artificial Intelligence

This paper investigates the optimization of Truncated Backpropagation Through Time (TBPTT) for training neural networks in digital audio effect modeling, with a focus on dynamic range compression. The study evaluates key TBPTT hyperparameters -- sequence number, batch size, and sequence length -- and their influence on model performance. Using a convolutional-recurrent architecture, we conduct extensive experiments across datasets with and without conditioning by user controls. Results demonstrate that carefully tuning these parameters enhances model accuracy and training stability, while also reducing computational demands. Objective evaluations confirm improved performance with optimized settings, while subjective listening tests indicate that the revised TBPTT configuration maintains high perceptual quality.
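The interplay of the three hyperparameters can be sketched as the batching scheme TBPTT imposes on a long signal: the signal is split into a number of parallel streams, and each optimization step consumes a (batch_size, seq_len) window per group of streams, carrying the recurrent state (with gradients truncated) across successive windows of the same stream. The function and its signature are illustrative assumptions, not the paper's training code.

```python
import numpy as np

def tbptt_batches(signal, n_sequences, batch_size, seq_len):
    """Yield (batch, reset_state) pairs for truncated BPTT.

    The 1-D signal is split into `n_sequences` contiguous streams;
    each step then yields a (batch_size, seq_len) window from a group
    of streams. `reset_state` is True only at the first window of each
    stream; afterwards the recurrent state is carried over, and the
    gradient is truncated at the window boundary.
    """
    stream_len = len(signal) // n_sequences
    streams = signal[: n_sequences * stream_len].reshape(n_sequences, stream_len)
    n_windows = stream_len // seq_len
    for w in range(n_windows):
        window = streams[:, w * seq_len:(w + 1) * seq_len]
        for b in range(0, n_sequences, batch_size):
            yield window[b:b + batch_size], (w == 0)
```

For a 1200-sample signal with 4 streams, batch size 2, and sequence length 50, this produces 6 windows of 2 batches each, making explicit how the three hyperparameters jointly set the number and shape of the optimization steps per epoch.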